614 research outputs found
Thermodynamic Alerter for Microbursts (TAMP)
The following subject areas are covered: microburst detection, location, and measurement; Thermodynamic Alerter for Microbursts (TAMP) prototypes; sensor-transmitter (Senstrans) design; TAMP installation; and DAPAD software.
Double point self-intersection surfaces of immersions
A self-transverse immersion of a smooth manifold M^{k+2} in R^{2k+2} has a
double point self-intersection set which is the image of an immersion of a
smooth surface, the double point self-intersection surface. We prove that this
surface may have odd Euler characteristic if and only if k is congruent to 1
modulo 4 or k+1 is a power of 2. This corrects a previously published result by
Andras Szucs.
The method of proof is to evaluate the Stiefel-Whitney numbers of the double
point self-intersection surface. By earlier work of the authors these numbers
can be read off from the Hurewicz image h(\alpha ) in H_{2k+2}\Omega ^{\infty
}\Sigma ^{\infty }MO(k) of the element \alpha in \pi _{2k+2}\Omega ^{\infty
}\Sigma ^{\infty }MO(k) corresponding to the immersion under the
Pontrjagin-Thom construction.
Comment: 22 pages. Published copy, also available at
http://www.maths.warwick.ac.uk/gt/GTVol4/paper4.abs.htm
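The link between the two claims in this abstract is a standard fact, not specific to this paper: for a closed surface, the top Stiefel-Whitney number equals the Euler characteristic modulo 2, so evaluating Stiefel-Whitney numbers settles the parity of the Euler characteristic. Stated in LaTeX:

```latex
% For a closed surface F, the top Stiefel-Whitney number equals
% the Euler characteristic modulo 2:
\langle w_2(TF), [F] \rangle \;\equiv\; \chi(F) \pmod{2}.
% Hence the double point self-intersection surface has odd Euler
% characteristic if and only if its Stiefel-Whitney number is nonzero.
```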
Bordism Groups of Immersions and Classes Represented by Self-Intersections
We prove a geometrical version of Herbert's theorem by considering the
self-intersection immersions of a self-transverse immersion up to bordism. This
generalises Herbert's theorem to additional cohomology theories and gives a
commutative diagram in the homotopy of Thom complexes. The proof uses Koschorke
and Sanderson's operations and the fact that bordism of immersions gives a
functor on the category of smooth manifolds and immersions.
Comment: 16 pages
DNNShifter: An Efficient DNN Pruning System for Edge Computing
Deep neural networks (DNNs) underpin many machine learning applications.
Production-quality DNN models achieve high inference accuracy by training
millions of DNN parameters, which has a significant resource footprint. This
presents a challenge for resources operating at the extreme edge of the
network, such as mobile and embedded devices that have limited computational
and memory resources. To address this, models are pruned to create lightweight
variants better suited to these devices. Existing pruning methods either cannot
match the quality of their unpruned counterparts without significant time costs
and overheads, or are limited to offline use cases. Our work rapidly derives
suitable model variants while maintaining the
accuracy of the original model. The model variants can be swapped quickly when
system and network conditions change to match workload demand. This paper
presents DNNShifter, an end-to-end DNN training, spatial pruning, and model
switching system that addresses the challenges mentioned above. At the heart of
DNNShifter is a novel methodology that prunes sparse models using structured
pruning, combining the accuracy-preserving benefits of unstructured pruning
with the runtime performance improvements of structured pruning. The pruned
model variants generated by DNNShifter are smaller in size and thus faster than
their dense and sparse predecessors, making them suitable for inference at the
edge while retaining accuracy close to that of the original dense model.
DNNShifter generates a portfolio of model variants that
can be swiftly interchanged depending on operational conditions. DNNShifter
produces pruned model variants up to 93x faster than conventional training
methods. Compared to sparse models, the pruned model variants are up to 5.14x
smaller and have a 1.67x inference latency speedup, with no compromise to
sparse model accuracy. In addition, DNNShifter has up to 11.9x lower overhead
for switching models and up to 3.8x lower memory utilisation than existing
approaches.
Comment: 14 pages, 7 figures, 5 tables
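The core idea the abstract describes, turning an unstructured-sparse model into a structurally smaller one, can be illustrated with a minimal sketch (this is not DNNShifter's actual implementation; NumPy stands in for a real DNN framework): after unstructured pruning zeroes individual weights, convolution output channels that end up entirely zero can be physically removed, yielding a smaller dense tensor that runs faster without changing the layer's output.

```python
import numpy as np

def prune_zero_channels(weight: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Remove output channels whose weights are all zero.

    `weight` has shape (out_channels, in_channels, kH, kW), as in a conv
    layer that was first sparsified by unstructured (per-weight) pruning.
    Returns the smaller dense tensor and the indices of the kept channels.
    """
    # A channel contributes nothing if every weight in it is zero.
    keep = np.abs(weight).reshape(weight.shape[0], -1).sum(axis=1) > 0
    kept_idx = np.flatnonzero(keep)
    return weight[keep], kept_idx

# Toy example: 4 output channels, two fully zeroed by unstructured pruning.
w = np.ones((4, 3, 3, 3))
w[1] = 0.0
w[3] = 0.0
pruned, kept = prune_zero_channels(w)
print(pruned.shape)  # (2, 3, 3, 3)
print(kept)          # [0 2]
```

In a full pipeline the corresponding input channels of the next layer would be sliced to match `kept`; the sketch only shows the single-layer step.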
DNNShifter: an efficient DNN pruning system for edge computing
Funding: This research is funded by Rakuten Mobile, Japan. Deep neural networks (DNNs) underpin many machine learning applications. Production-quality DNN models achieve high inference accuracy by training millions of DNN parameters, which has a significant resource footprint. This presents a challenge for resources operating at the extreme edge of the network, such as mobile and embedded devices that have limited computational and memory resources. To address this, models are pruned to create lightweight variants better suited to these devices. Existing pruning methods either cannot match the quality of their unpruned counterparts without significant time costs and overheads, or are limited to offline use cases. Our work rapidly derives suitable model variants while maintaining the accuracy of the original model. The model variants can be swapped quickly when system and network conditions change to match workload demand. This paper presents DNNShifter, an end-to-end DNN training, spatial pruning, and model switching system that addresses the challenges mentioned above. At the heart of DNNShifter is a novel methodology that prunes sparse models using structured pruning, combining the accuracy-preserving benefits of unstructured pruning with the runtime performance improvements of structured pruning. The pruned model variants generated by DNNShifter are smaller in size and thus faster than their dense and sparse predecessors, making them suitable for inference at the edge while retaining accuracy close to that of the original dense model. DNNShifter generates a portfolio of model variants that can be swiftly interchanged depending on operational conditions. DNNShifter produces pruned model variants up to 93x faster than conventional training methods. Compared to sparse models, the pruned model variants are up to 5.14x smaller and have a 1.67x inference latency speedup, with no compromise to sparse model accuracy.
In addition, DNNShifter has up to 11.9x lower overhead for switching models and up to 3.8x lower memory utilisation than existing approaches. DNNShifter is available for public use from https://github.com/blessonvar/DNNShifter.
Publisher PDF. Peer reviewed.
A family history of breast cancer will not predict female early onset breast cancer in a population-based setting
ABSTRACT: BACKGROUND: An increased risk of breast cancer for relatives of breast cancer patients has been demonstrated in many studies, and having a relative diagnosed with breast cancer at an early age is an indication for breast cancer screening. This indication has been derived from estimates based on data from cancer-prone families or from BRCA1/2 mutation families, and might be biased because BRCA1/2 mutations explain only a small proportion of the familial clustering of breast cancer. The aim of the current study was to determine the predictive value of a family history of cancer with regard to early onset of female breast cancer in a population-based setting.
METHODS: An unselected sample of 1,987 women with and without breast cancer was studied with regard to the age of diagnosis of breast cancer.
RESULTS: The risk of early-onset breast cancer was increased when there were: (1) at least 2 cases of female breast cancer in first-degree relatives (yes/no; HR at age 30: 3.09; 95% CI: 1.28-7.44), (2) at least 2 cases of female breast cancer in first- or second-degree relatives under the age of 50 (yes/no; HR at age 30: 3.36; 95% CI: 1.12-10.08), (3) at least 1 case of female breast cancer under the age of 40 in a first- or second-degree relative (yes/no; HR at age 30: 2.06; 95% CI: 0.83-5.12) and (4) any case of bilateral breast cancer (yes/no; HR at age 30: 3.47; 95% CI: 1.33-9.05). The positive predictive value of having 2 or more of these characteristics was 13% for breast cancer before the age of 70, 11% for breast cancer before the age of 50, and 1% for breast cancer before the age of 30.
CONCLUSION: Applying family-history-related criteria in an unselected population could result in the screening of many women who will not develop breast cancer at an early age.
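The positive predictive values reported above follow the standard definition, PPV = TP / (TP + FP). A minimal sketch with hypothetical counts (illustration only, not data from the study):

```python
def positive_predictive_value(true_pos: int, false_pos: int) -> float:
    """PPV: the fraction of screen-positive individuals who actually
    develop the outcome, TP / (TP + FP)."""
    return true_pos / (true_pos + false_pos)

# Hypothetical counts for illustration only (not from the study):
# of 100 women meeting >= 2 family-history criteria, 13 develop
# breast cancer before age 70.
ppv = positive_predictive_value(13, 87)
print(f"{ppv:.0%}")  # 13%
```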
Developing standards for reporting implementation studies of complex interventions (StaRI): a systematic review and e-Delphi
Germline variation in ADAMTSL1 is associated with prognosis following breast cancer treatment in young women
To identify genetic variants associated with breast cancer prognosis we conduct a meta-analysis of overall survival (OS) and disease-free survival (DFS) in 6042 patients from four cohorts. In young women, breast cancer is characterized by a higher incidence of adverse pathological features, unique gene expression profiles and worse survival, which may relate to germline variation. To explore this hypothesis, we also perform survival analysis in 2315 patients aged …
Peer reviewed.